Label-Dependencies Aware Recurrent Neural Networks
Authors
Abstract
In the last few years, Recurrent Neural Networks (RNNs) have proved effective on several NLP tasks. Despite this success, their ability to model sequence labeling is still limited. This led research toward solutions combining RNNs with models already proven effective in this domain, such as CRFs. In this work we propose a far simpler but very effective solution: an evolution of the simple Jordan RNN in which labels are re-injected as input into the network and converted into embeddings, in the same way as words. We compare this RNN variant to the other RNN models, Elman and Jordan RNNs, LSTM and GRU, on two well-known Spoken Language Understanding (SLU) tasks. Thanks to label embeddings and their combination at the hidden layer, the proposed variant, which uses more parameters than Elman and Jordan RNNs but far fewer than LSTM and GRU, not only is more effective than the other RNNs, but also outperforms sophisticated CRF models.
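The core idea of the abstract can be sketched in a few lines: at each step the previous predicted label is looked up in a label-embedding table and combined with the word embedding at the hidden layer, Jordan-style. The sketch below is a minimal illustration of that mechanism, not the authors' implementation; all names and sizes (`E_w`, `E_l`, `d_h`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: vocab, label set, word-emb,
# label-emb, and hidden dimensions.
V, L, d_w, d_l, d_h = 100, 10, 16, 8, 32

E_w = rng.normal(scale=0.1, size=(V, d_w))      # word embeddings
E_l = rng.normal(scale=0.1, size=(L, d_l))      # label embeddings (the key idea)
W = rng.normal(scale=0.1, size=(d_w + d_l, d_h))
b = np.zeros(d_h)
U = rng.normal(scale=0.1, size=(d_h, L))        # hidden-to-label projection

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def tag(word_ids):
    """Greedy decoding: the previous predicted label is embedded and
    re-injected as input at the next step (Jordan-style feedback)."""
    y_prev = 0  # a conventional start label
    out = []
    for w in word_ids:
        # Combine the word embedding with the previous label's embedding
        x = np.concatenate([E_w[w], E_l[y_prev]])
        h = np.tanh(x @ W + b)                   # hidden layer
        y_prev = int(np.argmax(softmax(h @ U)))  # fed back at the next step
        out.append(y_prev)
    return out

labels = tag([3, 17, 42])
```

During training, the feedback would use gold labels (or a scheduled mix), and the label-embedding table is learned jointly with the word embeddings, which is what lets the model capture label dependencies with so few extra parameters.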
Similar Papers
Recurrent Neural Network Structured Output Prediction for Spoken Language Understanding
Recurrent Neural Networks (RNNs) have been widely used for sequence modeling due to their strong capabilities in modeling temporal dependencies. In this work, we study and evaluate the effectiveness of using RNNs for slot filling, a key task in Spoken Language Understanding (SLU), with special focus on modeling label sequence dependencies. Recent work on slot filling using RNNs did not model la...
Label-Dependency Coding in Simple Recurrent Networks for Spoken Language Understanding
Modeling target label dependencies is important for sequence labeling tasks. This may become crucial in the case of Spoken Language Understanding (SLU) applications, especially for the slot-filling task, where models often have to deal with a high number of target labels. Conditional Random Fields (CRF) were previously considered the most efficient algorithm in these conditions. More recently...
Recurrent Attentional Reinforcement Learning for Multi-label Image Recognition
Recognizing multiple labels of images is a fundamental but challenging task in computer vision, and remarkable progress has been attained by localizing semantic-aware image regions and predicting their labels with deep convolutional neural networks. The step of hypothesis regions (region proposals) localization in these existing multi-label image recognition pipelines, however, usually takes re...
Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction
This paper proposes a novel context-aware joint entity and word-level relation extraction approach through semantic composition of words, introducing a Table Filling Multi-Task Recurrent Neural Network (TF-MTRNN) model that reduces the entity recognition and relation classification tasks to a table-filling problem and models their interdependencies. The proposed neural network architecture is c...
Multi-Label Image Classification with Regional Latent Semantic Dependencies
Deep convolutional neural networks (CNN) have demonstrated advanced performance on single-label image classification, and various advances have also been made in applying CNN methods to multi-label image classification, which requires annotating objects, attributes, scene categories, etc. in a single shot. Recent state-of-the-art approaches to multi-label image classification exploit the label depen...
Journal: CoRR
Volume: abs/1706.01740
Pages: -
Publication date: 2017